


Journal reference: Computer Networks and ISDN Systems,
Volume 28, issues 7–11, p. 931.
WWW Access to Legacy Client/Server Applications
Columbia University
Tech Report #CUCS-003-96
- Abstract
- We describe a method for accessing Client/Server applications from
standard World Wide Web browsers. An existing client for the system is
modified to perform HTTP Proxy duties. Web browser users simply configure
their browsers to use this HTTP Proxy, and can then access the system via
specially encoded URLs that the HTTP Proxy intercepts and sends to the
legacy server system. An example implementation using the Oz Process
Centered Software Development Environment is presented.
- Keywords
- HTTP, client, server, legacy, proxy, Oz
Introduction
Thousands of Client/Server applications have been developed. We are
concerned with those applications that consist of specialized server
software to run on (possibly several) central computers, and specialized
client software run by each user of the system.
The World Wide Web[1] has spawned the creation of many
Web browsers, each of which is "programmable" via the common HTML standard.
These Web browsers form a platform on which software developers can build
clients for Client/Server systems. A "mediator" can be created that acts as
a gateway between the Web browser clients and the existing server. In a
Client/Server architecture where the client software does nothing more than
display and format information retrieved from the server, a World Wide Web
browser can be used as a replacement for the client software used by the
human users of the system. In other systems, where the client software
performs additional tasks on behalf of the server or the users, the mediator
can perform this work on behalf of the Web browser-based clients.
Some have used CGI programs to accomplish the tasks of the mediator[2]. A normal HTTP server is set up somewhere on
the same LAN as the existing server application. Specialized CGI programs
are written to handle user requests, and the user is presented with an HTML
Forms-based interface. This is unsatisfactory for a number of reasons:
- Sessions Problem
With a CGI-based approach, it is difficult to emulate the behavior of
existing clients with respect to the server. In most Client/Server
systems, the client stays connected to the server through multiple
transmissions (i.e. a communications channel is brought up when the
client starts, and that channel is used for the duration of the client's
running time). With a CGI program, once a request is handled, the CGI
program terminates, closing down any communications channel with the
server.
- Cumbersome
In moving a large Client/Server system to the Web, many of the system's
commands need to be made available somehow for the Web users. A large
number of CGI scripts must be created and maintained in order to
accomplish this.
- Server load
Each time an HTTP server gets a request to run a CGI program, a new
process must be started on the HTTP server machine. With more than a few
clients accessing data simultaneously, a tremendous load is placed on
the HTTP server. In the standard Client/Server paradigm, each request
from a client is normally handled by the already running server.
Processes are not constantly created and then destroyed. Also, in a
typical Client/Server system, some work is done on the client side.
With a CGI-based system, all work normally done by the client must be
emulated by the CGI programs, adding more load to the HTTP server
machine.
- Slow
Because the HTTP server machine has to start a new process for each CGI
request, the web clients must sit and wait for (often unacceptable)
amounts of time for the response to each request they make. Standard
clients in a Client/Server system usually do not have to wait for
processes to be started before their request is processed. Also, unlike
a Web browser placing an HTTP request, clients in a Client/Server system
can send requests to the server application asynchronously, doing other
work while the request is processed. Web-based users must wait for each
request to be processed before starting another task.
Another approach, modifying the existing server software's code to "imitate"
a Web server, is effective only for very simple paradigms. It is easy to
imagine moving a database engine to the web in this fashion, where URLs
would represent queries on the database. Indeed, Oracle and other database
vendors are doing just this with some of their products[3]. Unfortunately, more interactive applications cannot
be moved to the Web easily in this fashion.
It may also be possible to use browser-specific tools like Mosaic's CCI[4] and Netscape's Plug-In APIs[5] to
create Web browser based clients for Client/Server systems. However, using
these APIs limits the use of the resulting Web-based client software to
specific platforms and specific Web browsers. This is an unnecessary
limitation which negates many of the benefits of creating a Web-based
client.
Every Web browser available today supports the use of an HTTP proxy[6] for retrieving documents on behalf of the Web
browser. These proxies are often used to give Web access to users behind an
Internet firewall or other system that restricts access to corporate
networks. When a Web browser user requests a document, rather than
connecting directly to the site specified in the document's URL, the Web
browser contacts the HTTP proxy. This proxy then fetches the document on
behalf of the Web browser, feeding the information back to it.
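The only difference a proxy sees, compared with an ordinary Web server, is the request line itself: a browser configured to use a proxy sends the full absolute URL rather than just a path, so the proxy knows which origin server to contact. A minimal sketch of this (the host and page names are illustrative, not from any real site):

```python
def build_proxy_request(url, host):
    """Build the HTTP/1.0 request a browser sends to a proxy.

    A direct request carries only a path ("GET /page.html HTTP/1.0");
    a proxy-style request carries the absolute URL, which is how the
    proxy learns which site to fetch from on the browser's behalf.
    """
    return ("GET %s HTTP/1.0\r\n"
            "Host: %s\r\n"
            "\r\n" % (url, host))

# What a proxy-configured browser would send for an ordinary page:
request = build_proxy_request("http://www.example.com/page.html",
                              "www.example.com")
```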
HTTP proxies and HTTP servers can be used to create mediators that allow Web
browser clients to access the existing server in a Client/Server
system. This approach allows for the easy migration of Client/Server
applications and their data to the World Wide Web. This mediator-based
approach does not require any changes to the existing application server
code.
This paper proceeds as follows. First, we describe the motivation for our
use of HTTP proxies as a method of connecting an existing Client/Server
system to the Web. Then, we list the requirements our method would have to
fulfill. Next, we describe the architecture we've developed for moving
existing Client/Server applications to the Web. We describe in detail one
example of using this approach, in which we describe using the architecture
to move an existing Client/Server system to the Web. Finally, we consider
some future work and extensions to the research presented here.
Motivation
By using a Web browser as the platform for client access to an existing
Client/Server system, a number of advantages are gained over the more
traditional use of specialized client software:
- Write a single client for all platforms
The creation of specialized client software allows for infinite
creativity with the user interface of a system. But this creativity comes
at a price, namely being forced to create a client for every platform your
software must run on. In the heterogeneous network environments of many
institutions, this can mean creating a client for Macintosh, MS Windows,
and numerous flavors of Unix. By using a Web browser as the client for a
system, it is possible that only one set of code (the HTML and helper code)
for the system will need to be maintained. In the case of a system in which
the client must perform some platform specific functions that cannot be
emulated by the mediator, new MIME types and helper applications can be used
(see Architecture below). In this case, the
MIME helper application must be maintained on a per-platform basis, but due
to the limited functionality it must implement, this code will be much
smaller than the platform-specific code in the original specialized client
applications.
- Modularity
Once an infrastructure is in place for access to the system via Web
browsers, it becomes simple to make modifications to the system which
target it for specific users or situations. For example, our system's
workflow management capabilities have been shown working successfully in a
healthcare environment with a Web browser based interface.
- Uniformity
In an environment in which multiple Client/Server applications are in use on
a regular basis every day by a group of users, moving the system to the Web
allows the possibly disparate interfaces of the different systems to be
merged into a common display format. This is beneficial for a variety of
reasons, including lower training costs for users of the systems and easier
transitioning of users between applications.
Requirements
We identify several requirements for moving an existing Client/Server system
to the Web:
- Achieves large percentage of normal Client/Server system
functionality
In order for the new Web-based system to be useful, it must implement a
large percentage of the functionality of the original specialized
client. There may be limited portions of the system which simply cannot
be moved to a Web paradigm, but if major features of the system cannot
be implemented, moving the system to the web becomes pointless.
- Minimal changes to existing Client/Server system
The changes necessary to move the system to the Web should not require
major revisions to existing code. Existing clients and servers should
be able to communicate as before. When moving to the Web a system whose
code cannot be changed (e.g., gatewaying a commercial product for which
source code is unavailable), the architecture must allow for the mediator
to be created with no changes to existing code.
- No changes to the HTTP or HTML standards
The architecture created must require no changes to existing Web
standards, including HTTP and HTML. Standard HTTP requests must be
handled by the mediators created to gateway requests to the existing
application server.
- Existing Web browser software must be used as is
In order to make the system available to a wide range of users, the
resulting system must allow the use of standard Web browser software
(e.g., Netscape, Mosaic, or Internet Explorer).
Architecture
In order to connect an existing Client/Server system to the Web while
fulfilling all the requirements stated above, we decided on an architecture
that interposes an HTTP proxy between the Web-based clients and the
existing application server (Figure 1). This HTTP proxy has been designed to intercept
HTTP requests for data and transform them into requests for the legacy
system using its native protocols.

Figure 1: Illustration of our architecture. The mediator sits
between the Web browser and the server application, gatewaying requests
between them. Normal WWW sites can be accessed as before, with the mediator
acting as an HTTP Proxy.
The HTTP proxy (or "mediator") can be run on the same machine as the
original server application or it can reside on a LAN attached to the server
machine. It does not need to run on the same computer as any of the Web
browser users, and a single mediator can be used to translate requests
from multiple Web-based users of the system. This solves the resource
problem of having one mediator per Web-based user.
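A minimal sketch of such a mediator, using Python's standard threaded HTTP server so that a single process can serve many browsers concurrently. The "oz" host name follows the paper's example, and the gateway stub is an illustrative assumption, not OzWeb's actual code:

```python
# Sketch of a mediator: one process serves many Web browsers,
# deciding per request whether to gateway to the legacy server
# or behave as an ordinary HTTP proxy.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import urlsplit

def gateway_to_legacy_server(path):
    # Placeholder: a real mediator would translate the path into
    # the legacy system's native protocol and relay the reply.
    return b"<html><body>gatewayed: " + path.encode() + b"</body></html>"

class MediatorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Proxy-style requests carry an absolute URL in the
        # request line, so self.path includes the host name.
        parts = urlsplit(self.path)
        if parts.hostname == "oz":
            body = gateway_to_legacy_server(parts.path)
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)
        else:
            # A real mediator would fetch the remote document here
            # and feed it back to the browser.
            self.send_error(502, "upstream fetch not implemented")

# One mediator instance on port 8080 would handle all browsers
# pointed at it:
#   ThreadingHTTPServer(("", 8080), MediatorHandler).serve_forever()
```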
There are a few different possible approaches to creating the mediator:
- Build the mediator from scratch
When adding a Web interface to a system for which source is not
available (perhaps a commercial software package) or when it is not
feasible to change the source to any piece of the system, it is possible
to build the mediator from scratch. If the protocols used between the
server and client are known, the mediator can be created as a new
"client" for the system. If the existing clients or server for the
system allow for extensions via scripting languages or the ability to
employ external tools, it may be possible to "hook in" the mediator's
HTTP proxy code with the existing client.
- Base the mediator on an existing client of the system
In this approach, an existing client for the Client/Server system is
modified to behave as an HTTP proxy server. When the new HTTP proxy
portion of the code receives a request from a Web browser client, it
simply sends the request on to the server as though it had received that
request normally. In a system where the server code is large and complex
and the client code is simpler, this approach can allow for fewer
changes to existing code.
- Modify the server code directly
In some cases, it may be possible to modify the Client/Server system's
server code to directly interact with Web browsers. This will depend
heavily on whether or not the server relies on existing clients being
"stateful", that is, remembering application state across requests. In
some systems, this may be preferable to modifying a client of the system
as the client code may actually be more complicated than the server's
code.
- Changing both the client and server
It may be preferable to base the mediator on an existing client for the
system as well as modify the server to behave differently when
interacting with the new client. This may be necessary if the server
expects each client of the system to represent only one user, since in
our architecture, a single mediator is responsible for multiple Web
based users of the system. Modifications to the server code and/or the
protocol used between the server and the new "mediator client" may be
necessary to achieve this one-to-many mapping.
Our architecture supports all of these methods of creating the mediator. In
addition, a variant of the above architecture is possible which does not use
an HTTP Proxy server. Many of the newer HTTP servers (including Netscape's
servers[7] and Apache[8])
support APIs which allow code to be linked directly into the running server.
Using these APIs, it may be possible to turn an ordinary web server into a
mediator in our architecture.
Challenges to our architecture
Several factors complicate our architecture. First, the HTTP protocol is
completely stateless, while many existing Client/Server systems use stateful
protocols. With the current HTTP specification, Web browsers connect to a
server, send a request, receive the response, and then close the
connection. The specialized clients in most Client/Server systems open a
connection to the server and then leave that connection running for the
entire lifetime of the session. In order to emulate this behavior with Web
based clients, the mediator can do several things:
- Maintain state information on behalf of every Web-based
client
It is possible to embed so-called "magic cookies" into the HTML sent
down to the Web browsers by the mediator. These cookies allow the
mediator to identify each browser independently, allowing state
information on a per-browser basis to be kept. This approach is
problematic, since it is impossible to determine when a Web browser
client of the system has crashed and restarted. In this case, the
mediator may have to abort transactions on behalf of the crashed client,
carry out its portion of some disconnection or fault tolerance protocol,
etc. Since it is impossible to determine when a client has crashed, the
mediator must pick some arbitrary means of determining this information,
perhaps basing it on idle time or some other imperfect method.
- Embed state information in the HTML sent to the Web-based
clients, storing no state information between requests
This method is an expansion on the magic-cookie based approach to
identifying Web clients. In this case, the magic cookies are generated
on a per client basis and are changed with each command the browser user
executes via the mediator. In this way, each request to the mediator
contains the state of the requesting Web client embedded directly into
it. Thus, the mediator can determine from each request the previous
requests the user has already completed successfully.
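This second approach can be sketched as a pair of helper functions that fold the user's command history into an opaque token appended to every link the mediator generates. The encoding shown is an illustrative choice for the sketch, not the scheme any particular mediator uses:

```python
import base64
import json

def embed_state(command, history):
    """Return a link path for `command` that carries the user's
    prior command history, so the mediator itself stores nothing
    between requests.  Names and encoding are illustrative."""
    token = base64.urlsafe_b64encode(
        json.dumps(history + [command]).encode()).decode()
    return "/%s/?state=%s" % (command, token)

def recover_state(token):
    """Decode the command history back out of an incoming request."""
    return json.loads(base64.urlsafe_b64decode(token.encode()))
```

Each response rewrites every embedded link with a fresh token, so whichever link the user follows next delivers the full history back to the mediator.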
Another problem complicating our architecture is the need to have certain
HTML files served directly by the mediator. There may be URLs handled by
the mediator which do not translate into commands for the existing server.
These URLs may, for example, provide the HTML user interface for the system
or provide help information to new users of the system. To solve this
issue, the mediator maintains a small document tree out of which it serves
HTML files. When the mediator receives a request from a Web client, it must
decide if the request is to be gatewayed to the server or if it is to be
answered with a document from the mediator's own document tree.
In addition to these complications is the serious question of how to handle
application-specific functions that the original clients handled on
behalf of the server. These include invoking external tools and using
platform specific libraries, devices, or operating system services. These
application-specific features of the existing client often cannot be
emulated by a Web browser client using only HTML. A number of techniques
can be used to solve these issues:
- Emulation by the mediator
It may be possible for the mediator to perform some of these client tasks
itself, on behalf of the Web browser clients. This is true in the case of
invoking external tools in a batch fashion, simply running them on certain
input and collecting their output. These batch tools require no
intervention by the user during their runtime. This may also be possible in
the case of invoking X-windows based interactive tools when both the
mediator and the Web-based clients are running on X displays. In this case,
the mediator starts the tool on its machine and has the tool display its
interface on the user's X display.
- New MIME types and MIME handlers[9]
In the case of tools or services that must be used on the Web client's
local computer, it is possible to create a platform-specific MIME helper
application, which is run in response to a new MIME type sent down to the
browser by the mediator. For example, it is possible to set up a Web
browser to run an application called "Foobar" when it receives an "X-Foobar"
MIME message. Since each HTTP response includes a MIME type specifier, the
mediator simply tells the browser that the response is of the "X-Foobar"
type, and the browser will then run the "Foobar" application.
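Constructing such a response is straightforward, since the MIME type is simply a header on the reply. A sketch, reusing the hypothetical "X-Foobar" type from the example above:

```python
def build_mime_response(body, mime_type="application/x-foobar"):
    """Build an HTTP response whose Content-Type is a custom MIME
    type.  A browser configured with a matching helper application
    will hand the body to that helper instead of rendering it.
    The "x-foobar" type is purely illustrative."""
    header = ("HTTP/1.0 200 OK\r\n"
              "Content-Type: %s\r\n"
              "Content-Length: %d\r\n"
              "\r\n" % (mime_type, len(body)))
    return header.encode() + body

response = build_mime_response(b"run-this-locally")
```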
- Mosaic CCI and Netscape Plug-In APIs
For better integration between the Web browser and the helper application,
browser-specific code can be written using the various APIs provided by the
different browser vendors. For example, if the system designer knows that
all Web-based users will be using Netscape's Navigator as their browser, he
or she can create a Netscape Plug-In that handles the new "X-Foobar" MIME
type sent by the mediator. These APIs allow code to be written that is
executed directly in the browser window, maintaining the seamless
integration of the Client/Server system with the Web browser.
- Java applets
Java applets (and other so-called mobile code systems, including Omniware[10] and GROW[11]) allow
for extensions similar to the browser-specific APIs mentioned above. Their
major advantage is that they are not browser or platform specific, and so it
is possible to create portable extensions which maintain integration between
the Client/Server system and the Web browser.
Example
The Oz system[12] is a Client/Server rule-based workflow
system currently targeted as a software development environment. The
clients for the system support the invocation of external tools to be used
during software development (i.e. the client is responsible for running the
compiling and editing tools used by the developers in the system).
Over the course of its research, our group has developed several clients for
use with the Oz system, including a TTY-based client, an Xview client, and a
Motif client. In creating our mediator, known as "OzWeb", we decided to
base our work on the TTY client. We were aiming to create a mediator that
had no user interface -- it was to run as a daemon process on a machine
connected to a LAN. While X toolkits like Xview and Motif allow for the
creation of an application without any windows, it was easier to begin with
a minimalist interface (like the one in our TTY client) and simply strip
away any pieces we didn't need.
OzWeb is conceptually broken into two parts: a standard HTTP proxy, and code
to communicate with our Oz server (see Figure 2). The HTTP portion simply receives requests
for Web documents from clients and then handles the requests, funneling the
data back to the Web browsers. The second portion, which communicates with
our application server, receives HTTP requests and responds to them either
by sending requests to our server software or by responding with HTML pages
from OzWeb's document tree.

Figure 2: OzWeb acts as a mediator between multiple Web-based
users and the application server. OzWeb can gateway requests to the server
or can serve documents out of its document tree.
In order to determine what to do with a request from a Web browser, OzWeb
looks at the URL in the request. If the URL refers to an Internet site,
OzWeb retrieves the document on behalf of the Web browser. If the URL is of
the form:
- http://oz/command name/parameter 1/parameter 2/.../parameter n/
OzWeb contacts the Oz server to perform the command on behalf of the Web
based user in the same manner that the TTY client would supply the command
and its parameters. As mentioned in the Architecture section,
certain URLs are handled by the HTTP Proxy itself, without contacting the
server application. For example the main startup screen of our system is
represented by the following URL:
http://oz/index.html
This URL presents the user with a message welcoming them to the system (Figure 3) and
sets up the display and command choices that are to be shown to the user.

Figure 3: Netscape 2.0 displaying the first screen shown to a user
connecting to the Oz system via the OzWeb mediator.
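The URL convention above can be sketched as a small parsing routine. This is an illustration of the scheme described in the text, not OzWeb's implementation:

```python
from urllib.parse import urlsplit

def parse_oz_url(url):
    """Split a specially encoded URL of the form
    http://oz/command/param1/.../paramN/ into (command, params).
    Returns None when the URL is not addressed to the legacy
    system, in which case the mediator acts as a plain proxy."""
    parts = urlsplit(url)
    if parts.hostname != "oz":
        return None
    segments = [s for s in parts.path.split("/") if s]
    if not segments:
        return None
    return segments[0], segments[1:]
```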
When OzWeb receives a request for a document, it checks the site field, and
in this case, since the site field says "oz", it realizes that this is a
request for the Oz system (as opposed to an actual request for a document
from the Web).
In this case, OzWeb next looks to see if there is a command called
"index.html" which is designated as a command the server should
handle. Finding no such command, OzWeb attempts to
find a file called "index.html" in its document tree. In this respect, it
is acting as a mini Web server, not as a traditional HTTP Proxy. Upon
finding the file called "index.html", OzWeb sends the contents of the file
down to the Web browser and closes the connection.
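The decision order just described (server command first, then the mediator's own document tree, then failure) can be sketched as follows; all names here are illustrative:

```python
def dispatch(name, server_commands, doc_tree):
    """Decide how to answer a request for `name`: a command the
    legacy server should handle takes priority, then a file from
    the mediator's document tree, then a not-found result.
    `doc_tree` is a dict standing in for files on disk."""
    if name in server_commands:
        return ("server", name)
    if name in doc_tree:
        return ("document", doc_tree[name])
    return ("not-found", None)
```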
The existing Oz client software relies heavily on the use of multiple
windows as well as multiple independent window panes. It is not a simple
interface to emulate. As a proof of concept, our first application of the
OzWeb Proxy was to simulate as closely as possible the existing GUI Oz
Client interface in the user's Web browser, using Netscape 2.0 Frames as the
mechanism for mimicking the existing menus, pop-up windows, etc.
The Oz server relies on its clients to invoke the tools that are used in
the course of software development. For example, when a user asks to
compile a file, the server tells the client to run the compiler on a given
source file. In the case of our system, we knew that all Web-based users
would have X display access to the system running the OzWeb mediator.
Because of this, we are able to have the mediator start the tools on behalf
of the Web clients. If a tool is interactive, we rely on X access to make
the interface appear on the user's screen (by resetting the X DISPLAY
variable).
As discussed in the Architecture section, in
the case of a system where X display access is not possible, or where
certain external tools or system libraries must be executed on the user's
machine, it is possible to create a MIME content handler and a corresponding
new MIME type that allows small pieces of specialized code to be run on the
user's system.
The OzWeb mediator has been specifically designed so that a single instance
of OzWeb can allow multiple users to access the system via the Web. An
unlimited number of users can configure their browsers to use the OzWeb
proxy as their HTTP proxy server, and they can all then access the Oz
server. With many hundreds of users, it may be more practical to have a
number of OzWeb proxy server machines, achieving a load balancing effect.
Great care was taken in the design of the OzWeb proxy server to make sure
that no state information is maintained by OzWeb itself. Instead, the HTML
sent down to browsers as a result of executing a command in the Oz system
is encoded in such a way as to make it possible to tell the last commands
executed by the particular client. Each URL link in the resulting HTML has a
parameter added. When the user clicks on one of these links, this parameter
is sent to the mediator along with the rest of the URL. This allows the
mediator to determine the sequence of commands the user has executed in the
past.
Related Work
Brooks et al.[15] have developed HTTP Stream
Transducers. These services, modeled as HTTP Proxy servers, allow the data
stream from a Web site to be modified to provide per-user markup. For
example, when a user visits a Web site they have already seen, the Stream
Transducer can add HTML to
the page to tell the user the date on which they last viewed the document.
Stream Transducers can be hooked together in sequence between the user's Web
browser and a web site in order to provide aggregate functionality.
Lotus' InterNotes[16] product uses CGI mechanisms to allow Web browser
access to documents and forms managed by the Notes Server. Documents to be
placed on the Web are pretranslated by a program that converts them to
HTML. These documents and forms are accessed through a standard HTTP server
as though they were normal HTML documents. In the case of Notes Forms, the
Submit button sends the contents of the form to a Lotus-supplied CGI program
that incorporates the data back into the Notes database. While this does
allow for some Web-based use of their system, the interaction model is
limited. Web-based users do not have access to the integrated email and
applications which standard Notes clients use.
Barta et al.[17] describe a toolkit for the
creation of
"Interface-Parasite" gateways. These gateways allow a synchronous
session-oriented tool, a telnet session for example, to be used via a Web
browser. Their toolkit requires no changes whatsoever to the source code of
the Server application, but their toolkit cannot handle an application in
which the Client can perform actions independently of the server.
Ockerbloom[18] proposes an alternative to MIME
types, called Typed Object Model (TOM), that could conceivably be employed
instead of a MIME extension
to allow the use of external tools in the client. Object types exported
from anywhere on the Internet can be registered in "type oracles",
specialized servers that may communicate among themselves to uncover the
definitions of types registered elsewhere. Web clients who happen upon a
type they do not understand can ask one of the type oracles how to convert
it into a known supertype. In this way, the Web clients would not have to
be set up to handle a new MIME type. They could simply query the type
oracle, which could return information on how to run the external tools.
Conclusions and Future Work
The architecture presented here fulfills our requirements nicely. The OzWeb
mediator allows Web-based users access
to most features of the Oz system. No changes were made to the Oz Server or
to its protocol for communicating with clients. Existing TTY, Motif and
Xview clients continue to work unchanged with the Oz Server and may operate
concurrently with Web-based clients.
We were able to achieve our goal of connecting to the mediator using the
standard HTTP protocol and a standard Web browser (Netscape 2.0). Other
browsers supporting the Netscape HTML extensions have been used as well.
We have already identified several areas for future work on our
mediator-based approach to moving Client/Server systems to the Web:
- MIME extension for handling non-X, non-NFS
environments
As we have described here, it is possible to create a MIME helper
application that can perform many of the duties of the original system
clients on the Web-based user's machine. This is crucial in environments
where infrastructure like X windows and NFS cannot be used to display the
user interface of external applications and share files between the computer
running the mediator code and the computer running the Web browser.
- Handling of CORBA-based[13] Client/Server
systems
If the existing server application is CORBA-aware and has registered object
types with a CORBA ORB, it is possible to create a mediator that uses the
ORB to communicate Web-based users' requests to the application server.
Rees, et al.[14] describe a system in which HTTP
methods like GET and POST are modeled as CORBA requests, and information is
transmitted using the CORBA IIOP protocol.
- Multi-server aspect of Oz and other Client/Server
systems
Oz is actually a multi-server system in which servers can communicate and
perform designated tasks on each other's data. A user can initiate the
interaction between his or her home server (the server they initially logged
into) and other servers accessible to the user. However, in Oz, the home
server performs most communications with foreign servers on behalf of the
user. We would like to experiment with creating a mediator that could
gateway HTTP requests to more than one server at a time.
References
- Tim Berners-Lee and Robert Cailliau, World Wide Web
Proposal for a HyperText Project, CERN European Laboratory for Particle
Physics, Geneva CH, November 1990, http://www.w3.org/hypertext/WWW/Proposal.html.
- Jean-Claude Mamou, ODBC and Mosaic, October 1995, http://www.w3.org/hypertext/WWW/Gateways/OQL.html.
- Oracle Corporation, Oracle WebSystem, http://www.oracle.com/.
- National Center for Supercomputing Applications, NCSA Mosaic
Common Client Interface, March 1995, http://www.ncsa.uiuc.edu/SDG/Software/XMosaic/CCI/cci-spec.html.
- Netscape Communications Corporation, Netscape Navigator
2.0 Plug-In Software Development Kit, January 1996, http://home.netscape.com/comprod/development_partners/plugin_api/index.html.
- Ari Luotonen and Kevin Altis, World-Wide Web
Proxies, First International World Wide Web Conference, Geneva CH, May
1994, http://www.w3.org/hypertext/WWW/Proxies/.
- Netscape Communications Corporation, Netscape Server API
(NSAPI), http://home.netscape.com/newsref/std/server_api.html.
- The Apache Server Group, Apache Server API, http://www.apache.org/docs/API.html.
- The Internet Engineering Task Force, MIME (Multipurpose
Internet Mail Extensions) RFC #1521, September 1993, ftp://ds.internic.net/rfc/rfc1521.txt.gz.
- Steven Lucco, Oliver Sharp, Robert Wahbe, Omniware: A
Universal Substrate for Web Programming, 4th International World Wide Web
Conference, Boston MA, December 1995, http://www.w3.org/pub/Conferences/WWW4/Papers/165.
- The GNU Project, GROW: The GNU Remote Operations Web, http://www.cygnus.com/tiemann/grow/.
- Israel Ben-Shaul and Gail E. Kaiser, A Paradigm for
Decentralized Process Modeling, Kluwer Academic Publishers, Boston MA, 1995.
- Object Management Group, What is CORBA?, http://ruby.omg.org/corba.htm.
- Mike Beasley, Nigel Edwards, Mark Madsen, Ashley
McClenaghan, Owen Rees, A Web of Distributed Objects, 4th International
World Wide Web Conference, Boston MA, December 1995, http://www.w3.org/pub/Conferences/WWW4/Papers/85.
- Charles Brooks, Murray S. Mazer, Scott Meeks, and
Jim Miller, Application-Specific Proxy Servers as HTTP Stream
Transducers, 4th International World Wide Web Conference,
Boston MA, December 1995, http://www.osf.org/www/waiba/papers/www4oreo.htm.
- Lotus Development Corp., Lotus InterNotes Web
Publisher, September 1995, http://www.lotus.com/corpcomm/334a.htm.
- Robert A. Barta and Manfred Hauswirth,
Interface Parasite Gateways, 4th International World Wide Web
Conference, Boston MA, December 1995, http://www.w3.org/pub/Conferences/WWW4/Papers/273/.
- John Ockerbloom, Introducing Structured Data Types
into Internet-scale Information Systems, PhD thesis proposal, Carnegie
Mellon University School of Computer Science, May 1994.
About the Authors
Stephen E. Dossick
No biographical information available.
http://www.cs.columbia.edu/~sdossick/
Gail E. Kaiser
Gail E. Kaiser is a tenured Associate Professor of Computer
Science and the Director of the Programming Systems Laboratory
at Columbia University. She was an NSF Presidential Young
Investigator in Software Engineering, and has published about 90
refereed papers in collaborative work, software development
environments, software process, extended transaction models,
object-oriented languages and databases, and parallel and
distributed systems. Prof. Kaiser is an associate editor of the
ACM Transactions on Software Engineering and Methodology and has
served on about 25 program committees. She received her PhD and
MS from CMU and her ScB from MIT.
http://www.cs.columbia.edu/~kaiser/
Research Credits
The Programming Systems
Laboratory is supported in part by the Advanced Research Projects Agency
under ARPA Order B128, monitored by Air Force Rome Laboratory under contract
F30602-94-C-0197, in part by National Science
Foundation CCR-9301092, and in part by New York State Science and
Technology Foundation Center for Advanced Technology in High Performance Computing and
Communications in Healthcare NYSSTF-CAT-95013.
The views and conclusions contained in this document are those of the
authors and should not be interpreted as representing the official
policies, either expressed or implied, of the US or NYS government,
ARPA, Air Force, NSF, or NYSSTF.